Conversation

Contributor

@GuoRen868 GuoRen868 commented Nov 12, 2025

What this PR does / why we need it?

Add a custom op API DispatchGmmCombineDecode for A3, including the kernel implementation, Python API, and pytest cases.

vLLM version: v0.11.0
vLLM main: vllm-project/vllm@24d6314

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests ‌to ensure it works and is not broken by other future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new custom operator DispatchGmmCombineDecode for the Ascend platform. The changes include the operator definition, kernel implementation, build scripts, and PyTorch bindings. My review has identified a few critical issues. There is a significant issue in the shell script csrc/build_aclnn.sh regarding environment variable setup which could cause silent failures. Another critical bug is in csrc/pytorch_npu_helper.hpp where tensor strides are calculated incorrectly, which will fail for non-contiguous tensors. Additionally, there's a confusing duplicated field in csrc/custom_ops/kernels/dispatch_gmm_combine_decode/op_kernel/dispatch_gmm_combine_decode_tiling.h that should be corrected to improve maintainability.


# install custom ops
./build_out/custom_ops/run/CANN_ascend910_93_ubuntu_aarch64.run --install-path=/usr/local/Ascend/ascend-toolkit/latest/opp/
source /usr/local/Ascend/ascend-toolkit/latest/opp/vendors/customize/bin/set_env.bash
Contributor


critical

The source command on this line will only affect the environment of the script's execution shell. When this script is executed, it runs in a sub-shell, and any environment variables set within it are lost when the script finishes. If the intention is to modify the environment of the calling shell, this script should be sourced (e.g., source csrc/build_aclnn.sh) rather than executed. The #!/bin/bash shebang is misleading if the script is meant to be sourced. This can lead to silent failures in the environment setup.
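As a minimal sketch of the sub-shell behavior described above (hypothetical script path and variable name, not part of this PR):

```shell
# Create a throwaway script that exports a variable (hypothetical names).
cat > /tmp/set_env_demo.sh <<'EOF'
export DEMO_VAR=from_script
EOF

# Executing the script spawns a sub-shell, so the export is lost afterwards.
bash /tmp/set_env_demo.sh
echo "after execute: ${DEMO_VAR:-unset}"   # after execute: unset

# Sourcing runs the script in the current shell, so the export persists.
source /tmp/set_env_demo.sh
echo "after source: ${DEMO_VAR:-unset}"    # after source: from_script
```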

Comment on lines 222 to 237

// Adapt the weight input of the dispatch_gmm_combine_decode op
if (acl_data_type == ACL_INT8 && dimNum == 3) {
format = ACL_FORMAT_FRACTAL_NZ;
}

auto acl_tensor =
aclCreateTensor(at_tensor.sizes().data(), at_tensor.sizes().size(), acl_data_type, strides.data(),
0, format, at_tensor.sizes().data(), at_tensor.sizes().size(),
const_cast<void *>(at_tensor.storage().data()));

return acl_tensor;
}

inline aclScalar *ConvertType(const at::Scalar &at_scalar)
{
Contributor


critical

The calculation of tensor strides is incorrect as it assumes the tensor is contiguous. This will lead to incorrect memory access and data corruption for non-contiguous tensors. You should use the tensor's actual strides and storage offset provided by PyTorch via at_tensor.strides() and at_tensor.storage_offset().

    const auto dimNum = at_tensor.dim();
    aclFormat format = ACL_FORMAT_ND;

    // Adapt the weight input of the dispatch_gmm_combine_decode op
    if (acl_data_type == ACL_INT8 && dimNum == 3) {
        format = ACL_FORMAT_FRACTAL_NZ;
    }

    auto acl_tensor =
        aclCreateTensor(at_tensor.sizes().data(), dimNum, acl_data_type, at_tensor.strides().data(),
                        at_tensor.storage_offset(), format, at_tensor.sizes().data(), dimNum,
                        const_cast<void *>(at_tensor.storage().data()));
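To make the contiguity pitfall concrete, here is a small sketch using NumPy (whose `strides` are reported in bytes, whereas `at::Tensor::strides()` works in elements): a transposed view shares storage with the original array, but its real strides differ from what a contiguous-layout assumption would compute.

```python
import numpy as np

# A 2x3 row-major (contiguous) int32 array: element strides are (3, 1).
a = np.arange(6, dtype=np.int32).reshape(2, 3)
print(tuple(s // a.itemsize for s in a.strides))   # (3, 1)

# Its transpose is a non-contiguous VIEW of the same storage.
# The real element strides are (1, 3); assuming contiguity for the
# 3x2 shape would wrongly compute (2, 1) and read the wrong elements.
t = a.T
print(tuple(s // t.itemsize for s in t.strides))   # (1, 3)
print(t.flags['C_CONTIGUOUS'])                     # False
```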

Comment on lines 27 to 29
uint32_t aicNum; // aivNum
uint32_t aivNum; // aivNum
Contributor


high

The comment on aicNum appears to be a copy-paste error: both aicNum and aivNum in the struct DispatchGmmCombineDecodeInfo carry the comment // aivNum, which can lead to confusion and bugs. Please clarify the purpose of each field and correct the comments; aicNum should presumably refer to the AI Core count and aivNum to the AI Vector core count.

Suggested change
uint32_t aicNum; // aivNum
uint32_t aivNum; // aivNum
uint32_t aicNum; // aicNum
uint32_t aivNum; // aivNum

@github-actions

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@github-actions

github-actions bot commented Dec 1, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@github-actions

github-actions bot commented Dec 3, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

- name: glm-4-5
os: linux-aarch64-a3-16
tests: tests/e2e/nightly/models/test_glm4_5.py
- name: custom-op-dispatch_gmm_combine_decode
Collaborator


This file is for models only, no need to add it here. @Potabk

Collaborator

@Potabk Potabk Dec 4, 2025


You can add a new job for the ops at the end of https://github.com/vllm-project/vllm-ascend/blob/main/.github/workflows/vllm_ascend_test_nightly_a3.yaml. That's enough for your PR; I'll do a more detailed restructuring later.

  custom-ops-tests:
    name: test ops
    if: always() && (github.event_name == 'schedule' || github.event_name == 'workflow_dispatch')
    needs: multi-node-tests
    strategy:
      fail-fast: false
      matrix:
        test_config:
          - name: custom-op-dispatch_gmm_combine_decode
            os: linux-aarch64-a3-16
            tests: tests/e2e/nightly/multicard_ops/test_dispatch_gmm_combine_decode.py
    uses: ./.github/workflows/_e2e_nightly_single_node.yaml
    with:
      runner: ${{ matrix.test_config.os }}
      vllm: 0.12.0
      image: 'swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/vllm-ascend:nightly-a3'
      tests: ${{ matrix.test_config.tests }}
      name: ${{ matrix.test_config.name }}

Contributor


Done

@wangqiankun13 wangqiankun13 force-pushed the fused_pr branch 2 times, most recently from 70968e0 to c15496e Compare December 4, 2025 11:44
@github-actions

github-actions bot commented Dec 4, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@wangqiankun13 wangqiankun13 force-pushed the fused_pr branch 3 times, most recently from a442a76 to be7b444 Compare December 5, 2025 02:55
@wangqiankun13 wangqiankun13 force-pushed the fused_pr branch 2 times, most recently from 124c358 to cd8422a Compare December 5, 2025 04:53
@github-actions github-actions bot added the documentation Improvements or additions to documentation label Dec 5, 2025
@wangqiankun13 wangqiankun13 force-pushed the fused_pr branch 2 times, most recently from cb8f30f to fafa08e Compare December 5, 2025 07:21
@wangxiyuan wangxiyuan merged commit 4bd1030 into vllm-project:main Dec 6, 2025
19 checks passed

Labels

documentation Improvements or additions to documentation module:tests
